11 research outputs found

    A Roadmap for HEP Software and Computing R&D for the 2020s

    Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.

    IT Lightning Talks: session #5

    Interested in working on building an immersive, panoramic virtual visit of the CERN Data Centre? We'll show the system used during the CERN Open Days, its limitations, and plans for taking it to the next level as a permanent installation to be used by visitors from around the world.

    CERN IT Consultancy team 2018

    CERN IT Consultants from top left to bottom right: Eduardo ALVAREZ FERNANDEZ, Vincent BIPPUS, Xavier ESPINAL CURULL, Arash KHODABANDEH, Veronique LEFEBURE, Sebastian LOPIENSKI, Ignacio REGUERO, Jaroslava SCHOVANCOVÁ, Bruno SILVA DE SOUSA, and Liviu VALSAN

    Summary of the HEPiX autumn meeting

    The HEPiX Fall 2014 meeting is taking place at the University of Nebraska-Lincoln. The HEPiX forum brings together worldwide Information Technology staff, including system administrators, system engineers, and managers from the High Energy Physics and Nuclear Physics laboratories and institutes, to foster a learning and sharing experience between sites facing scientific computing and data challenges. Participating sites include BNL, CERN, DESY, FNAL, IN2P3, INFN, JLAB, NIKHEF, RAL, SLAC, TRIUMF and many others.

    Comparison of Software Technologies for Vectorization and Parallelization

    This paper demonstrates how modern software development methodologies can be used to give an existing sequential application a considerable performance speed-up on modern x86 server systems. Whereas, in the past, speed-up was directly linked to the increase in clock frequency when moving to a more modern system, current x86 servers present a plethora of “performance dimensions” that need to be harnessed with great care. The application we used is a real-life data analysis example in C++ analyzing High Energy Physics data. The key software methods used are OpenMP, Intel Threading Building Blocks (TBB), Intel Cilk Plus, and the auto-vectorization capability of the Intel compiler (Composer XE). Somewhat surprisingly, the Message Passing Interface (MPI) is also successfully added, although our focus is on single-node rather than multi-node performance optimization. The paper underlines the importance of algorithmic redesign in order to optimize each performance dimension and links this to close control of the memory layout in a thread-safe environment. The data-fitting algorithm at the heart of the application is very floating-point intensive, so the paper also discusses how to ensure optimal performance of mathematical functions (in our case, the exponential function) as well as numerical correctness and reproducibility. The test runs on single-, dual-, and quad-socket servers show, first of all, that vectorization of the algorithm (with either auto-vectorization by the compiler or the use of Intel Cilk Plus Array Notation) gives more than a factor of 2 in speed-up when the data layout in memory is properly optimized. Using coarse-grained parallelism, all three approaches (OpenMP, Cilk Plus, and TBB) showed good parallel speed-up on the available CPU cores. The best result was obtained with OpenMP, but by combining Cilk Plus and TBB with MPI in order to tie processes to sockets, these two software methods nicely closed the gap, and TBB came out with a slight advantage in the end. Overall, we conclude that the best implementation in terms of both ease of implementation and the resulting performance is a combination of the Intel Cilk Plus Array Notation for vectorization and a hybrid TBB and MPI approach for parallelization.
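
    As an illustration of the techniques the paper compares, here is a minimal sketch combining coarse-grained TBB parallelism with a unit-stride, auto-vectorizable inner loop over a contiguous (structure-of-arrays) data layout. The Gaussian-plus-flat-background likelihood, the data, and the parameter values are assumptions for illustration only, not the paper's actual fitting code.

        // Minimal sketch (not the paper's code): TBB coarse-grained parallelism
        // over chunks of events, with an exp/log-heavy, unit-stride inner loop
        // that the compiler can auto-vectorize.
        // Build with e.g.: g++ -O2 -std=c++17 sketch.cpp -ltbb
        #include <tbb/parallel_reduce.h>
        #include <tbb/blocked_range.h>
        #include <functional>
        #include <vector>
        #include <cmath>
        #include <cstdio>

        // Negative log-likelihood of a Gaussian signal plus a flat background.
        // Contiguous doubles (structure of arrays) keep the inner loop unit-stride.
        double negative_log_likelihood(const std::vector<double>& x,
                                       double mu, double sigma, double frac) {
            const double inv_norm = 1.0 / (sigma * std::sqrt(2.0 * 3.141592653589793));
            const double flat = 1.0 / 10.0;   // uniform pdf over an assumed [-5, 5] window
            return tbb::parallel_reduce(
                tbb::blocked_range<std::size_t>(0, x.size()), 0.0,
                [&](const tbb::blocked_range<std::size_t>& r, double local) {
                    // Independent, unit-stride iterations: a candidate for
                    // auto-vectorization (or array notation).
                    for (std::size_t i = r.begin(); i != r.end(); ++i) {
                        const double z = (x[i] - mu) / sigma;
                        const double g = std::exp(-0.5 * z * z) * inv_norm;
                        local -= std::log(frac * g + (1.0 - frac) * flat);
                    }
                    return local;
                },
                std::plus<double>());          // combine the per-chunk partial sums
        }

        int main() {
            std::vector<double> events(1u << 20, 0.5);   // dummy data
            std::printf("NLL = %f\n",
                        negative_log_likelihood(events, 0.0, 1.0, 0.8));
        }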

    Data Centre Technology and Market Trends

    In this ITTF session we will provide an overview of data centre technologies and market trends in the fields of server processors, memory architectures, server platforms, storage technology (both solid-state and spinning media), Intel future roadmaps, Open Compute Project hardware, and server-side networking. We will begin with a peek into the evolution of processors over the last 40+ years and provide an outlook into future processor trends. The highlights of the most recent Intel server processor generation (Xeon E5-2600 v3, Haswell-EP) will be presented together with the specifics of the new generation of DDR memory technology employed. Alternative processor architectures from contenders like ARM Holdings (with their AArch64 architecture) and IBM (with their OpenPOWER initiative) will be discussed. An overview of existing enterprise solid-state technology will be given, showing the kind of performance provided by currently available enterprise SSDs and future directions for storage devices based on non-volatile memory. We will continue with an overview of the enterprise spinning disk market. The techniques used by the hard drive manufacturers to satisfy the demand for ever-growing capacities per spindle will be described. Finally, an analysis suggesting which type of drive(s) best fits our applications will be presented. The presentation will also include Intel's plans for upcoming computing hardware: the new Skylake microarchitecture, the Omni-Path interconnect, non-volatile main memory, Intel Transactional Synchronization Extensions (TSX), and the Rack Scale Architecture. It will also broadly cover the next generations of Intel co-processors: Knights Landing and Knights Hill. The Open Compute Project, OCP (http://www.opencompute.org), was launched by Facebook in 2011 with the objective of building efficient computing infrastructures at the lowest possible cost. After an initial evaluation of two OCP twin server enclosures in 2013, we decided to launch a tender for acquiring OCP-compliant hardware comprising both CPU servers and storage. We will introduce the OCP designs currently available on the market, the advantages they provide, and how OCP could fit with CERN's computing and storage needs in the future. We will finish by showing the current status of server-side networking in the CERN Data Centre, followed by an overview of the market trends and future changes for server networking.

    Report on the parallelization of the MLfit benchmark using OpenMP and MPI

    This report describes the development of MPI parallelization support on top of the existing OpenMP-parallel version of the MLfit benchmark, enabling hybrid evaluation on multicore and distributed computational hosts. The MLfit benchmark is used at CERN openlab as a representative of the data analysis applications used in the high energy physics community. The report includes the results of scalability runs obtained with several configurations and systems.
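
    For context, the following is a minimal sketch of the hybrid pattern the report describes: each MPI rank evaluates a partial negative log-likelihood over its slice of the events with an OpenMP reduction, and MPI_Allreduce combines the per-rank sums. The Gaussian model and the dummy data are illustrative assumptions, not the actual MLfit benchmark code.

        // Minimal hybrid MPI + OpenMP sketch (illustrative, not the MLfit code).
        // Build with e.g.: mpicxx -fopenmp -O2 hybrid_nll.cpp
        #include <mpi.h>
        #include <vector>
        #include <cmath>
        #include <cstdio>

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int rank = 0, nranks = 1;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &nranks);

            // Each rank holds (or would read) only its own share of the events.
            std::vector<double> x(1u << 20, 0.5);        // dummy data
            const double mu = 0.0, sigma = 1.0;
            const double norm = 0.5 * std::log(2.0 * 3.141592653589793 * sigma * sigma);

            // Intra-node parallelism: OpenMP reduction over this rank's events.
            double local_nll = 0.0;
            #pragma omp parallel for reduction(+ : local_nll)
            for (long i = 0; i < static_cast<long>(x.size()); ++i) {
                const double z = (x[i] - mu) / sigma;
                local_nll += norm + 0.5 * z * z;         // -log of a Gaussian pdf
            }

            // Inter-node parallelism: sum the partial results across all ranks.
            double global_nll = 0.0;
            MPI_Allreduce(&local_nll, &global_nll, 1, MPI_DOUBLE, MPI_SUM,
                          MPI_COMM_WORLD);

            if (rank == 0)
                std::printf("Total NLL over %d rank(s) = %f\n", nranks, global_nll);
            MPI_Finalize();
            return 0;
        }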

    Experience of public procurement of Open Compute servers

    The Open Compute Project, OCP (http://www.opencompute.org/), was launched by Facebook in 2011 with the objective of building efficient computing infrastructures at the lowest possible cost. The technologies are released as open hardware, with the goal of developing servers and data centres following the model traditionally associated with open source software projects. In 2013 CERN acquired a few OCP servers in order to compare performance and power consumption with standard hardware. The conclusions were that there are sufficient savings to motivate an attempt to procure a large-scale installation. One objective is to evaluate whether the OCP market is sufficiently mature and broad enough to meet the constraints of a public procurement. This paper summarizes this procurement, which started in September 2014 and involved a Request for Information (RFI) to qualify bidders and a Request for Tender (RFT).

    Experience with procuring, deploying and maintaining hardware at remote co-location centre

    In May 2012 CERN signed a contract with the Wigner Data Centre in Budapest for an extension to CERN's central computing facility beyond its current boundaries set by the electrical power and cooling available for computing. The centre is operated as a remote co-location site providing rack space, electrical power and cooling for server, storage and networking equipment acquired by CERN. The contract includes a 'remote hands' service for physical handling of hardware (rack mounting, cabling, pushing power buttons, ...) and maintenance repairs (swapping disks, memory modules, ...). However, only CERN personnel have network and console access to the equipment for system administration. This report gives an insight into the adaptations of hardware architecture, procurement and delivery procedures undertaken to enable remote physical handling of the hardware. We will also describe tools and procedures developed for automating the registration, burn-in testing, acceptance and maintenance of the equipment, as well as an independent but important change to IT asset management (ITAM) developed in parallel as part of the CERN IT Agile Infrastructure project. Finally, we will report on experience from the first large delivery of 400 servers and 80 SAS JBOD expansion units (24 drive bays) to Wigner in March 2013.